Optimality Driven Nearest Centroid Classification from Genomic Data
Authors
Abstract
Related works
Nearest-centroid classifiers have recently been successfully employed in high-dimensional applications, such as in genomics. A necessary step when building a classifier for high-dimensional data is feature selection. Feature selection is frequently carried out by computing univariate scores for each feature individually, without consideration for how a subset of features performs as a whole. We...
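The two-stage recipe described above — score each feature on its own, keep the top scorers, then assign each sample to the class with the nearest centroid — can be sketched as follows. This is an illustrative univariate-scoring baseline in NumPy, not the optimality-driven procedure the paper proposes; all function names are ours.

```python
import numpy as np

def univariate_scores(X, y):
    """Two-class t-like score, computed for each feature independently."""
    X0, X1 = X[y == 0], X[y == 1]
    diff = X0.mean(axis=0) - X1.mean(axis=0)
    se = np.sqrt(X0.var(axis=0, ddof=1) / len(X0)
                 + X1.var(axis=0, ddof=1) / len(X1))
    return np.abs(diff) / (se + 1e-12)

def nearest_centroid_fit(X, y, k_features):
    """Keep the k top-scoring features, then store one centroid per class."""
    keep = np.argsort(univariate_scores(X, y))[-k_features:]
    centroids = np.stack([X[y == c][:, keep].mean(axis=0) for c in (0, 1)])
    return keep, centroids

def nearest_centroid_predict(X, keep, centroids):
    """Assign each sample to the class whose centroid is closest."""
    d = np.linalg.norm(X[:, keep][:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)
```

Note that the scores are computed one feature at a time, which is exactly the limitation the abstract points out: a feature set that scores well individually need not perform well jointly.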
Improved Small Molecule Activity Determination via Centroid Nearest Neighbors Classification
Small molecules which alter biological processes or disease states are of significant interest. In-silico drug discovery commonly uses measures of structural similarity for identifying the “right” small molecule for a given task. Because explicit structure similarity determination is a very difficult task, modern chemoinformatics solutions typically use “quantitative structure-activity relation...
Nearest Shrunken Centroid as Feature Selection of Microarray Data
The nearest shrunken centroid classifier uses shrunken centroids as prototypes for each class; each test sample is assigned to the class whose shrunken centroid is nearest to it. In our study, the nearest shrunken centroid classifier was used simply to select important genes prior to classification. Random Forest, a decision tree based classification algorithm, is chosen as a classi...
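The shrinkage step behind this classifier soft-thresholds each class centroid toward the overall centroid; features whose standardized differences are shrunk to zero in every class drop out, which is the built-in gene selection the abstract relies on. A minimal sketch, with a simplified per-feature scale (the published method uses a pooled within-class standard deviation plus an offset) and a function name of our choosing:

```python
import numpy as np

def shrunken_centroids(X, y, delta):
    """Soft-threshold class centroids toward the overall centroid.

    delta controls the shrinkage: features whose standardized
    class-vs-overall difference falls below delta are pulled exactly
    onto the overall mean for that class.  Simplified illustration.
    """
    overall = X.mean(axis=0)
    s = X.std(axis=0, ddof=1) + 1e-12          # per-feature scale (simplified)
    shrunk = {}
    for c in np.unique(y):
        # m_c standardizes the class-mean-minus-overall-mean difference
        m_c = np.sqrt(1.0 / (y == c).sum() - 1.0 / len(y))
        d = (X[y == c].mean(axis=0) - overall) / (m_c * s)
        d = np.sign(d) * np.maximum(np.abs(d) - delta, 0.0)  # soft threshold
        shrunk[c] = overall + m_c * s * d
    return shrunk
```

With delta = 0 the shrunken centroids coincide with the ordinary class means; as delta grows, uninformative features collapse onto the overall centroid first, leaving only the strongly differential genes to influence classification.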
Boosting nearest shrunken centroid classifier for microarray data
Nearest shrunken centroid classifier (NSC) is a class of linear classifiers with built-in feature selection, and has proven useful for analyzing microarray data. The simple linear structure of the classification boundary makes NSC easy to interpret and implement, but this simplicity can also fail to generalize well for some data. In this paper we propose boosting NSC to improve it...
LDA/SVM Driven Nearest Neighbor Classification
Nearest neighbor (NN) classification relies on the assumption that class conditional probabilities are locally constant. This assumption becomes false in high dimensions with finite samples due to the curse of dimensionality. The NN rule introduces severe bias under these conditions. We propose a locally adaptive neighborhood morphing classification method to try to minimize bias. We use local ...
Journal
Title: PLoS ONE
سال: 2007
ISSN: 1932-6203
DOI: 10.1371/journal.pone.0001002